Establishing open and general benchmarks has been a critical driving force behind the success of modern machine learning techniques. As machine learning is being applied to broader domains and tasks, there is a need to establish richer and more diverse benchmarks to better reflect the reality of application scenarios. Graph learning is an emerging field of machine learning that urgently needs more and better benchmarks. To accommodate this need, we introduce Graph Learning Indexer (GLI), a benchmark curation platform for graph learning. In comparison to existing graph learning benchmark libraries, GLI highlights two novel design objectives. First, GLI is designed to incentivize \emph{dataset contributors}. In particular, we incorporate various measures to minimize the effort of contributing and maintaining a dataset, increase the usability of the contributed dataset, and encourage attribution to the dataset's different contributors. Second, GLI is designed to curate a knowledge base, instead of a plain collection, of benchmark datasets. We use multiple sources of meta information to augment the benchmark datasets with \emph{rich characteristics}, so that they can be easily selected and used in downstream research or development. The source code of GLI is available at \url{https://github.com/Graph-Learning-Benchmarks/gli}.
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible, and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of the medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical, and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
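The "simple, additive, and compositional" transform style the abstract mentions can be illustrated without the library itself. Below is a minimal sketch of the compose-and-apply pattern in plain NumPy; the `Compose`, `ensure_channel_first`, and `scale_intensity` names are illustrative stand-ins, not MONAI's actual API.

```python
import numpy as np

class Compose:
    """Chain callables left to right (compose-style preprocessing pipeline)."""
    def __init__(self, transforms):
        self.transforms = transforms
    def __call__(self, x):
        for t in self.transforms:
            x = t(x)
        return x

def ensure_channel_first(img):
    # Add a leading channel axis if the volume has none: (D, H, W) -> (1, D, H, W).
    return img[None] if img.ndim == 3 else img

def scale_intensity(img, lo=0.0, hi=1.0):
    # Min-max normalize intensities into [lo, hi].
    mn, mx = img.min(), img.max()
    return (img - mn) / (mx - mn) * (hi - lo) + lo

pipeline = Compose([ensure_channel_first, scale_intensity])
volume = np.random.default_rng(0).normal(size=(16, 16, 16))  # a fake 3-D volume
out = pipeline(volume)
print(out.shape)  # (1, 16, 16, 16)
```

New transforms slot in additively: appending another callable to the list extends the pipeline without touching existing steps.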
Emotion recognition technology enables computers to classify human affective states into discrete categories. However, emotions can fluctuate rather than remain stable, even over short periods. It is also difficult to fully exploit the spatial distribution of EEG signals because of their 3-D topological structure. To address these issues, we propose a Locally Temporal-Spatial pattern learning Graph Attention Network (LTS-GAT) in this study. In LTS-GAT, a divide-and-conquer scheme is used to examine local information in the temporal and spatial dimensions of EEG patterns based on a graph attention mechanism. A dynamic domain discriminator is added to improve robustness against inter-individual variations of EEG statistics, so as to learn robust EEG feature representations across different participants. We evaluated LTS-GAT on two public datasets for affective computing studies under the individual-dependent and individual-independent paradigms. The effectiveness of the LTS-GAT model was demonstrated in comparison with other existing mainstream methods. Moreover, visualization methods were used to illustrate the relations between different brain regions and emotion recognition. Meanwhile, the weights of different time segments were also visualized to investigate the emotion-sparsity problem.
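The graph attention mechanism that LTS-GAT builds on can be sketched in a few lines. Below is a minimal single-head GAT-style layer over an electrode graph in NumPy; all shapes and names are illustrative, not the paper's implementation.

```python
import numpy as np

def graph_attention(h, adj, W, a):
    """One GAT-style layer.

    h:   (N, F) node features, one row per EEG electrode
    adj: (N, N) 0/1 adjacency of the electrode graph
    W:   (F, F2) shared linear projection
    a:   (2*F2,) attention vector
    """
    z = h @ W                                  # project node features: (N, F2)
    N = z.shape[0]
    e = np.full((N, N), -1e9)                  # logits, masked (non-edges) by default
    for i in range(N):
        for j in range(N):
            if adj[i, j] > 0:
                s = np.concatenate([z[i], z[j]]) @ a
                e[i, j] = s if s > 0 else 0.2 * s   # LeakyReLU
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)  # row-wise softmax over neighbors
    return alpha @ z                           # attention-weighted aggregation

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))    # 4 electrodes, 8 features each
adj = np.ones((4, 4))          # fully connected toy electrode graph
out = graph_attention(h, adj, rng.normal(size=(8, 8)), rng.normal(size=(16,)))
print(out.shape)  # (4, 8)
```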
Colorectal polyp classification is a critical clinical examination. To improve the classification accuracy, most computer-aided diagnosis algorithms recognize colorectal polyps by adopting Narrow-Band Imaging (NBI). However, NBI usually suffers from missing utilization in actual clinical scenarios, since acquiring this specific type of image requires manually switching the light mode when polyps have been detected using White-Light (WL) images. To avoid the above situation, we propose a novel method to directly achieve accurate white-light colonoscopy image classification by enforcing structured cross-modal representation consistency. In practice, a pair of multi-modal images, i.e. NBI and WL, are fed into a shared Transformer to extract hierarchical feature representations. Then a newly designed Spatial Attention Module (SAM) is adopted to compute the similarities between the class token and patch tokens from multiple levels for each modality. By aligning the class tokens and spatial attention maps of paired NBI and WL images, the Transformer keeps both global and local representation consistency across the two modalities. Extensive experimental results illustrate that the proposed method outperforms recent studies, achieving multi-modal prediction with a single Transformer while significantly improving the classification accuracy when only WL images are used.
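The alignment objective at the core of this idea can be sketched simply: penalize disagreement between paired NBI/WL token embeddings. The function below is a minimal stand-in (one minus mean cosine similarity), not the paper's SAM module.

```python
import numpy as np

def align_loss(tok_nbi, tok_wl):
    """Cross-modal consistency sketch: 1 - mean cosine similarity between
    paired token embeddings from the NBI and WL branches (illustrative)."""
    a = tok_nbi / np.linalg.norm(tok_nbi, axis=-1, keepdims=True)
    b = tok_wl / np.linalg.norm(tok_wl, axis=-1, keepdims=True)
    return float(1.0 - np.mean(np.sum(a * b, axis=-1)))

rng = np.random.default_rng(0)
t = rng.normal(size=(5, 32))      # 5 paired tokens, 32-dim embeddings
print(align_loss(t, t))           # 0.0 — identical representations incur no loss
```

Minimizing such a loss pulls the WL representations toward the NBI ones, so at test time WL images alone benefit from the NBI-informed features.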
Intelligent decision-making for unmanned combat aerial vehicles (UCAVs) has long been a challenging problem. Traditional search methods can hardly satisfy the real-time demands of highly dynamic air-combat scenarios. Reinforcement learning (RL) methods can significantly shorten decision time by using neural networks. However, the sparse-reward problem limits their convergence speed, and artificial prior-experience rewards can easily deviate from the optimal convergence direction of the original task, which causes great difficulty for RL air-combat applications. In this paper, we propose a homotopy-based soft actor-critic method (HSAC), which focuses on solving these problems by following the homotopy path between the original task with sparse reward and an auxiliary task with artificial prior-experience reward. We also prove the convergence and feasibility of this method. To confirm our method, we build a detailed 3-D air-combat simulation environment for RL-based training, and implement our method in an attacking-horizontal-flight UCAV task and a self-play confrontation task. Experimental results show that our method performs better than methods utilizing only the sparse reward or only the artificial prior-experience reward. The agent trained by our method can reach a 98.3% win rate in the attacking-horizontal-flight task, and an average win rate of 67.4% when facing agents trained by the other two methods.
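The homotopy-path idea can be sketched as a reward blend that is annealed during training: start from the dense prior-experience reward and continuously deform toward the original sparse objective. This is a minimal illustration of the concept, not HSAC's actual update rule.

```python
import numpy as np

def homotopy_reward(r_sparse, r_prior, lam):
    """Blend along the homotopy path: lam=0 gives the auxiliary shaped task,
    lam=1 the original sparse-reward task (illustrative sketch)."""
    return (1.0 - lam) * r_prior + lam * r_sparse

# Anneal lam from 0 to 1 over training: the policy is guided by the dense
# prior reward early on, and ends up optimizing the true sparse objective.
for lam in np.linspace(0.0, 1.0, 5):
    print(lam, homotopy_reward(r_sparse=0.0, r_prior=1.0, lam=lam))
```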
To facilitate research on text generation, this paper presents a comprehensive and unified library, TextBox 2.0, focusing on the use of pre-trained language models (PLMs). To be comprehensive, our library covers $13$ common text generation tasks and their corresponding $83$ datasets and further incorporates $45$ PLMs covering general, translation, Chinese, dialogue, controllable, distilled, prompting, and lightweight PLMs. We also implement $4$ efficient training strategies and provide $4$ generation objectives for pre-training new PLMs from scratch. To be unified, we design the interfaces to support the entire research pipeline (from data loading to training and evaluation), ensuring that each step can be fulfilled in a unified way. Despite its rich functionality, the library is easy to use, through either the friendly Python API or the command line. To validate the effectiveness of our library, we conduct extensive experiments and exemplify four types of research scenarios. The project is released at: https://github.com/RUCAIBox/TextBox.
Neural networks, especially the recently proposed neural operator models, are increasingly being used to find the solution operator of differential equations. Compared to traditional numerical solvers, they are much faster and more efficient in practical applications. However, one critical issue is that training neural operator models requires large amounts of ground-truth data, which usually come from slow numerical solvers. In this paper, we propose a physics-guided data augmentation (PGDA) method to improve the accuracy and generalization of neural operator models. Training data is augmented naturally through the physical properties of differential equations, such as linearity and translation. We demonstrate the advantage of PGDA on a variety of linear differential equations, showing that PGDA can improve the sample complexity and is robust to distributional shift.
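The linearity-based augmentation is easy to make concrete: if $Lu_1 = f_1$ and $Lu_2 = f_2$ for a linear operator $L$, then $L(au_1 + bu_2) = af_1 + bf_2$, so linear combinations of existing training pairs are valid new samples at zero solver cost. Below is a sketch with a toy linear operator; the function name and shapes are illustrative, not the paper's code.

```python
import numpy as np

def augment_linear(fs, us, n_new, rng):
    """Create new (f, u) pairs as random linear combinations of existing
    ones — valid whenever the underlying operator is linear."""
    i = rng.integers(0, len(fs), size=(n_new, 2))
    a = rng.normal(size=(n_new, 1))
    b = rng.normal(size=(n_new, 1))
    return (a * fs[i[:, 0]] + b * fs[i[:, 1]],
            a * us[i[:, 0]] + b * us[i[:, 1]])

rng = np.random.default_rng(0)
m = 8
# A toy linear operator L (stand-in for a discretized differential operator).
L = np.diag(np.full(m, -2.0)) + np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
us = rng.normal(size=(10, m))   # "ground truth" solutions from a slow solver
fs = us @ L.T                   # matching source terms: L u = f
f_new, u_new = augment_linear(fs, us, n_new=50, rng=rng)
print(np.allclose(u_new @ L.T, f_new))  # True — augmented pairs still satisfy L u = f
```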
Accurate polyp segmentation is of great importance for colorectal cancer diagnosis and treatment. However, due to the high cost of producing accurate mask annotations, existing polyp segmentation methods suffer from severe data shortage and impaired model generalization. Conversely, coarse polyp bounding-box annotations are more accessible. Thus, in this paper, we propose a boosted BoxPolyp model to make full use of both accurate mask and extra coarse box annotations. In practice, box annotations are applied to alleviate the over-fitting issue of previous polyp segmentation models, which generates fine-grained polyp areas through the iteratively boosted segmentation model. To achieve this goal, a fusion filter sampling (FFS) module is first proposed to generate pixel-wise pseudo labels from box annotations with less noise, leading to significant performance improvements. Besides, considering the appearance consistency of the same polyp, an image consistency (IC) loss is designed. Such IC loss explicitly narrows the distance between features extracted by two different networks, which improves the robustness of the model. Note that our BoxPolyp is a plug-and-play model, which can be merged into any appealing backbone. Quantitative and qualitative experimental results on five challenging benchmarks confirm that our proposed model outperforms previous state-of-the-art methods by a large margin.
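The core idea of turning a coarse box into a pixel-wise pseudo label can be sketched as follows: keep confident foreground predictions only inside the box and treat everything outside as background. This is a minimal illustration of the filtering idea, not the paper's FFS module.

```python
import numpy as np

def box_filtered_pseudo_label(prob, box, thresh=0.5):
    """Pixel-wise pseudo label from a box annotation plus a segmentation
    probability map (illustrative sketch).

    prob: (H, W) predicted foreground probabilities
    box:  (y0, x0, y1, x1) half-open bounding box
    """
    y0, x0, y1, x1 = box
    mask = np.zeros(prob.shape, dtype=bool)
    mask[y0:y1, x0:x1] = True              # anything outside the box is background
    return ((prob >= thresh) & mask).astype(np.uint8)

prob = np.zeros((6, 6))
prob[1:4, 1:4] = 0.9                       # a confident predicted blob
label = box_filtered_pseudo_label(prob, box=(0, 0, 3, 3))
print(int(label.sum()))  # 4 — only confident pixels inside the 3x3 box survive
```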
Prompt tuning has been employed as an efficient way to adapt large vision-language pre-trained models (e.g. CLIP) to various downstream tasks in data-limited or label-limited settings. Nonetheless, visual data (e.g., images) are by default a prerequisite for learning prompts in existing methods. In this work, we advocate that the effectiveness of image-text contrastive learning in aligning the two modalities (for training CLIP) further makes it feasible to treat texts as images for prompt tuning, and introduce TaI prompting. In contrast to visual data, text descriptions are easy to collect, and their class labels can be directly derived. Particularly, we apply TaI prompting to multi-label image recognition, where sentences in the wild serve as alternatives to images for prompt tuning. Moreover, with TaI, double-grained prompt tuning (TaI-DPT) is further presented to extract both coarse-grained and fine-grained embeddings for enhancing the multi-label recognition performance. Experimental results show that our proposed TaI-DPT outperforms zero-shot CLIP by a large margin on multiple benchmarks, e.g., MS-COCO, VOC2007, and NUS-WIDE, while it can be combined with existing methods of prompting from images to improve recognition performance further. Code is released at https://github.com/guozix/TaI-DPT.
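Why text can stand in for images here: CLIP places both modalities in one embedding space, so the same similarity-based scorer that classifies an image at test time can be driven by a text-description embedding during prompt tuning. A minimal sketch of that scorer (toy embeddings, illustrative names — not the TaI-DPT code):

```python
import numpy as np

def multilabel_scores(sample_emb, class_prompt_embs):
    """Cosine similarity of one embedding (an image at test time, or a text
    description during text-as-image tuning) against each class-prompt
    embedding, CLIP-style."""
    s = sample_emb / np.linalg.norm(sample_emb)
    c = class_prompt_embs / np.linalg.norm(class_prompt_embs, axis=1, keepdims=True)
    return c @ s

class_embs = np.eye(3)                 # 3 toy class-prompt embeddings
caption = np.array([0.1, 0.9, 0.2])    # embedding of a text description
scores = multilabel_scores(caption, class_embs)
print(int(np.argmax(scores)))  # 1 — the caption matches class 1 most strongly
```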
Feature selection has attracted much attention over the past decades, as it can reduce data dimensionality while maintaining the original physical meaning of the features, which offers better interpretability than feature extraction. However, most existing feature-selection methods, especially deep-learning-based ones, often focus only on features with high scores while neglecting those with low scores during training, as well as the order of important candidate features. This can be risky, since some important and relevant features may unfortunately be ignored during training, leading to suboptimal solutions or misleading selections. In our work, we handle feature selection by exploiting the features with lower importance scores, and propose a feature-selection framework based on a novel complementary feature mask. Our method is generic and can be easily integrated into existing deep-learning-based feature-selection approaches to improve their performance. Experiments were conducted on benchmark datasets and show that the proposed method can select more representative and informative features than the state of the art.
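The "complementary mask" notion can be made concrete: partition features into the top-$k$ selected set and its complement of lower-scoring features, which the framework also exploits during training instead of discarding outright. This is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def complementary_masks(scores, k):
    """Split features into a selected top-k mask and its complementary mask
    of lower-scoring features (illustrative sketch)."""
    order = np.argsort(scores)[::-1]      # indices sorted by descending score
    selected = np.zeros(len(scores), dtype=bool)
    selected[order[:k]] = True
    return selected, ~selected            # complement covers the remaining features

scores = np.array([0.9, 0.1, 0.5, 0.7, 0.2])
sel, comp = complementary_masks(scores, k=2)
print(sel.tolist())   # [True, False, False, True, False]
print(comp.tolist())  # the complementary, lower-importance features
```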